
In part I of this series I discussed clinical pathways – how clinicians approach problems and the role of diagnosis in this approach. In part II I discussed the thought processes involved in deciding which diagnostic tests are worth ordering.

In this post I will discuss some of the logical fallacies and heuristics that tend to bias and distort clinical reasoning. Many of these cognitive pitfalls apply to patients as well as clinicians.

Pattern recognition and data mining

Science, including the particular manifestation we like to call science-based medicine, is about using objective methods to determine which patterns in the world are really real, vs. those that just seem to be real. The dire need for scientific methodology results partly from the fact that humans have an overwhelming tendency to automatically sift through large amounts of data looking for patterns, and we are very good at finding them, even when they are just random fluctuations in the data.

Many cognitive errors that plague any attempt to investigate the world (including clinical investigations) are some variation on the basic concept of mining large amounts of data in order to find apparent patterns, and then assuming the patterns are real or using confirmation bias to reinforce the perception that the pattern is real.

We have written many times about this concept as it applies to research – multiple comparisons, sub-group analyses, publication bias, trying various statistical methods, poor meta-analyses, and blatant cherry-picking of studies are all ways to pick apparent signals out of the noise. But this happens in the clinical setting as well. In fact it is more of a problem in the clinical setting, which is by necessity anecdotal and often cannot benefit from blinding or repetition.

One way to mine data is to order a large number of tests. The normal range for a test is usually defined as two standard deviations around the mean of the bell curve of results from a healthy population. This means that about 95% of results from healthy individuals will fall within this range, and about 5% will fall outside it. Order a battery of 20 tests (like a chem-20 blood panel) and, on average, one will come back abnormal by chance alone.
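A quick back-of-the-envelope calculation makes the point. As a sketch in Python (assuming, for simplicity, that the 20 tests are independent and that each flags 5% of healthy people as abnormal; real panel results are correlated, so this is only illustrative):

```python
# Probability of at least one "abnormal" result in a healthy patient who gets
# a panel of 20 tests, assuming each test independently flags 5% of healthy
# people (an illustrative simplification; real panel results are correlated).
n_tests = 20
false_positive_rate = 0.05

expected_abnormal = n_tests * false_positive_rate
p_at_least_one = 1 - (1 - false_positive_rate) ** n_tests

print(f"Expected abnormal results: {expected_abnormal:.1f}")   # 1.0
print(f"P(at least one abnormal):  {p_at_least_one:.0%}")      # about 64%
```

The same arithmetic applies to ordering tests in series rather than in parallel: each additional test is another roll of the dice.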

This example is fairly obvious, and most clinicians learn about it as medical students. There are more subtle forms, however. A patient with a difficult diagnosis will commonly go through multiple diagnostic procedures in series until something abnormal is found. It is then tempting to conclude that the abnormal test is causally related to whatever symptoms were being investigated. It is also possible, however, that the clinician simply kept ordering tests until they got a false positive.

The same process applies to treatment. A clinician (or a patient seeking treatment from multiple clinicians) may try one treatment after another until they finally find one that works – or until their symptoms spontaneously improve, in which case they will attribute that improvement to the last treatment they happened to try.
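As an illustration of how this plays out, here is a toy simulation with invented numbers (not clinical data): suppose a self-limiting condition resolves spontaneously with a 15% chance in any given month, and the patient tries a new, completely inert treatment each month.

```python
import random

# Toy simulation (invented numbers, purely illustrative): a self-limiting
# condition resolves spontaneously with probability 0.15 in any given month.
# The patient tries a new, completely inert treatment each month for up to a
# year; whichever one is in use when symptoms resolve gets the credit.
random.seed(0)
monthly_resolution_prob = 0.15
max_months = 12
n_patients = 10_000

credited_improvements = 0
for _ in range(n_patients):
    for month in range(max_months):
        if random.random() < monthly_resolution_prob:
            credited_improvements += 1   # some inert treatment gets the credit
            break

print(f"{credited_improvements / n_patients:.0%} of patients improve within a year "
      "and credit whichever inert treatment they happened to be taking at the time")
```

In this sketch roughly 86% of patients improve within the year, and every one of those apparent responses is pure natural history.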

A partial solution to the above pitfalls is to order a confirmatory test to follow up any positive result, or, in the case of treatment, to stop and then restart the treatment to see if the beneficial effect goes away and then returns.

A great deal of data mining also occurs in the process of taking a history, or in the patient’s history itself. Patients, for example, will often search their memory for anything that may have caused their symptoms. They generally employ open-ended criteria (trying to think of anything interesting, rather than a specific cause), and underestimate the noise in their everyday lives.

The result is that it is almost always possible to think of something that happened, some exposure, some minor trauma, a stressor, a life change, a new environment – and then assume a causal relationship. Confirmation bias then kicks in to reinforce this belief, and that leads us to the next section.

Mechanisms of confirmation bias

Confirmation bias is a general term that refers to cognitive processes that tend to reinforce beliefs we already hold. This is mostly thought of as remembering hits (confirming information) and forgetting or dismissing misses (contradictory information).

The fallibility of human memory also contributes greatly to confirmation bias. Patients will not only selectively remember information that reinforces their narrative, but their memory will also become progressively contaminated and distorted to reinforce it further.

One example of this is anchoring. We have a poor memory for how long ago an event happened. We tend to “telescope”, meaning we underestimate how far in the past an event was. When a patient says they have had a symptom for one year, it is likely that it has really been two to three years.

Patients, however, may also anchor one event in time to another event. This may be accurate and therefore helpful if they are anchoring to a public event that fixes their memory in time. Just as often, however, the anchoring is false, an artifact of their evolving narrative.

For example, a patient may have mentally anchored the onset of their headaches to a minor car accident, because they have come to believe that the accident is the ultimate cause of their current symptoms. This is not an unreasonable hypothesis, but over time the details of the patient’s memory will shift to fit the story. After a year of telling their story to different doctors, they may report that they were perfectly fine until immediately after the accident, when all their symptoms began. Meanwhile their medical records may indicate that they were complaining of headaches a year prior to the accident.

That, of course, is the reason clinicians need to keep obsessive records: they are a remedy for the vagaries of memory. This is also why anecdotal case reports are so unreliable.

Placebo effects also play into confirmation bias. A treatment that addresses a patient’s narrative is more likely to evoke a positive placebo response than one that does not, and that response will then be taken as confirmation that the narrative is correct.

The representativeness heuristic

There is a general cognitive tendency to estimate probabilities based more upon how well something fits the typical characteristics of a category than upon the base rate of that category.

The classic experiment presented subjects with a character profile of a college student – nerdy, good at math, likes computers, likes to spend time working alone – and then asked them to estimate the probability that the student was an engineering major. Most people rated the probability as very high, because the profile was representative of a typical engineering student.

However, only 1% of students at the college are engineering majors (the base rate), and when this is taken into consideration it is much more likely that the student is not in engineering.
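To make the base-rate arithmetic concrete, here is a minimal sketch. The 1% base rate comes from the example above; the profile-fit rates are invented purely for illustration:

```python
# Bayes' theorem with a 1% base rate. The profile-fit rates are invented
# for illustration; only the base rate comes from the example above.
p_engineering = 0.01        # base rate of engineering majors
p_fit_given_eng = 0.90      # assumed: engineering majors who fit the profile
p_fit_given_other = 0.10    # assumed: all other students who fit the profile

p_fit = (p_fit_given_eng * p_engineering
         + p_fit_given_other * (1 - p_engineering))
p_eng_given_fit = p_fit_given_eng * p_engineering / p_fit

print(f"P(engineering | fits profile) = {p_eng_given_fit:.1%}")   # about 8%
```

Even when the profile is nine times more common among engineering majors, the low base rate means a profile-matching student is still probably not one.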

This reasoning also applies to diagnosis. Clinicians tend to be highly confident in a diagnosis when the patient and their symptoms are highly representative of the typical presentation. This is reasonable as far as it goes, but is incomplete and therefore flawed reasoning. You also have to consider the base rate of the disease in question.

This results in what is called the zebra diagnosis. Medical students are famous for this fallacy – coming to the conclusion that a patient has a rare disease because they have some typical features. They have not yet learned through experience that rare things are rare, and common things are common. When you consider the base rate, even a typical presentation of a rare disease may not be the most likely diagnosis. An atypical presentation of a very common disease may be far more likely.

Rather than thinking in terms of representativeness, clinicians are better off thinking in terms of base rates and predictive value. A symptom may be very typical of, and therefore representative of, a particular illness (fatigue, for example, is a typical presentation of chronic fatigue syndrome). But that symptom may also be present in the healthy population or in many other diseases. The presence of that symptom may therefore not be very predictive of the particular diagnosis.
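The same Bayesian arithmetic can be recast in clinical terms as positive predictive value. In this sketch every number is invented for illustration: a symptom present in 90% of people with a rare disease, but also in 20% of everyone else.

```python
# Positive predictive value of a "typical" but non-specific symptom.
# Every number here is an invented assumption for illustration.
prevalence = 0.001          # 1 in 1,000 people have the disease
p_symptom_given_disease = 0.90
p_symptom_given_healthy = 0.20

true_positives = p_symptom_given_disease * prevalence
false_positives = p_symptom_given_healthy * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)

print(f"P(disease | symptom) = {ppv:.2%}")   # well under 1%
```

Even a highly “typical” symptom carries almost no predictive weight on its own when the disease is rare and the symptom is common.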

Another way to look at this is that some symptoms are more specific to certain diagnoses, while others are non-specific. Every week I have patients come into my office with a list of symptoms they pulled off the internet, convinced that they have a rare or uncommon deadly disease. They are almost always wrong, because they have fallen for this cognitive error. They don’t have the background knowledge to know which symptoms are predictive and which are not, and they are also not familiar with the base rates of various diseases.

In this example another effect also comes into play – the Forer effect. This refers to the tendency to take a general description and apply it to oneself, finding examples that confirm the description. This applies not only to horoscopes and psychic readings, but also to lists of symptoms on the internet.

The representativeness heuristic manifests in many important yet subtle ways in clinical practice. For example, if you are a woman in the emergency room having a heart attack you are less likely to be properly diagnosed and treated than if you are a man with the same presentation. This is because we are biased to think of a middle-aged man as the typical heart attack patient – they are more representative.

The toupee fallacy

You may think you can always tell when someone is wearing a toupee, but the problem with this observation is that you have no idea how often you fail to notice one. This fallacy applies to diagnosis as well. When you look for a disease you may be likely to find it, but you have no idea how often the disease is present when you don’t look for it.

This is not completely true, as the patient may seek out a second opinion, or you may refer them to a specialist who does make the diagnosis. (This is why it is critical to give feedback to clinicians who initially missed a diagnosis.) Or the disease may progress and the diagnosis may declare itself.

For many benign, self-limiting, or chronic stable conditions, however, a proper diagnosis may not be possible with current technology, or may simply never be made. A clinician can nevertheless convince themselves of uncanny diagnostic acumen if they only look at the positive outcomes and never examine the data systematically.

Related to this is congruence bias – the tendency to test only your own hypothesis, and not competing hypotheses. In medicine this manifests in taking a history and ordering tests, both of which are ways of testing your clinical hypotheses.

If, for example, you suspect that poor sleep is a major contributor to a syndrome such as headaches, you may ask all of your patients with headache about their sleep quality and find that most have poor sleep. This will seem to confirm your hypothesis. However, if you also asked your patients without headaches, you might find a similar rate of poor sleep.
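A minimal sketch with invented numbers shows why: the observation in headache patients only seems meaningful until the competing hypothesis, that poor sleep is just as common without headaches, is actually tested.

```python
# Invented counts for illustration: poor sleep looks causally linked to
# headaches if only headache patients are asked, but the comparison group
# tells a different story.
headache = {"poor_sleep": 60, "good_sleep": 40}
no_headache = {"poor_sleep": 58, "good_sleep": 42}

def poor_sleep_rate(group: dict) -> float:
    return group["poor_sleep"] / (group["poor_sleep"] + group["good_sleep"])

print(f"Poor sleep with headaches:    {poor_sleep_rate(headache):.0%}")     # 60%
print(f"Poor sleep without headaches: {poor_sleep_rate(no_headache):.0%}")  # 58%
```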

Conclusion

Both patients and clinicians are subject to a long list of cognitive biases and errors in thinking that conspire to confirm beliefs about the cause and effective treatment of symptoms and illness. This is why, without the anchoring of adequate scientific methods, and a healthy dose of applied skepticism, practitioners will tend to slowly drift into a world of quackery.

Any treatment can seem to work, and any made-up disease can seem to be real, when we rely only upon our naïve reasoning. The pages of SBM are full of dramatic examples.

Other entries in this series

Part I
Part II



Posted by Steven Novella

Founder and currently Executive Editor of Science-Based Medicine, Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella has also produced two courses with The Great Courses, and published a book on critical thinking, also called The Skeptics’ Guide to the Universe.